

Search for: All records

Creators/Authors contains: "Barnes, Elizabeth A"


  1. Abstract: Subseasonal‐to‐decadal atmospheric prediction skill attained from initial conditions is typically limited by the chaotic nature of the atmosphere. However, for some atmospheric phenomena, prediction skill on subseasonal‐to‐decadal timescales is increased when the initial conditions are in a particular state. In this study, we employ machine learning to identify sea surface temperature (SST) regimes that enhance prediction skill of North Atlantic atmospheric circulation. An ensemble of artificial neural networks is trained to predict anomalous, low‐pass filtered 500 mb height at 7–8 weeks lead using SST. We then use self‐organizing maps (SOMs) constructed from 9 regions within the SST domain to detect state‐dependent prediction skill. SOMs are built using the entire SST time series, and we assess which SOM units feature confident neural network predictions. Four regimes are identified that provide skillful seasonal predictions of 500 mb height. Our findings demonstrate the importance of extratropical decadal SST variability in modulating downstream ENSO teleconnections to the North Atlantic. The methodology presented could aid future forecasting on subseasonal‐to‐decadal timescales.
    Free, publicly-accessible full text available April 28, 2026
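    A minimal sketch of the regime-detection step described in this entry, assuming the minisom package; the trained neural-network ensemble is stood in for by random confidence and correct arrays, and all sizes are hypothetical:

      # Sketch of detecting state-dependent skill with a self-organizing map.
      # `sst`, `confidence`, and `correct` are random stand-ins for SST-anomaly
      # maps and the output of an already-trained neural-network ensemble.
      import numpy as np
      from minisom import MiniSom  # pip install minisom

      rng = np.random.default_rng(0)
      n_samples, n_grid = 500, 64                # hypothetical sizes
      sst = rng.standard_normal((n_samples, n_grid))
      confidence = rng.random(n_samples)         # ensemble agreement per sample
      correct = rng.random(n_samples) < 0.6      # whether the prediction verified

      som = MiniSom(3, 3, n_grid, sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(sst, 5000)

      # Map every sample to its best-matching SOM unit, then ask which units
      # host confident predictions and whether those predictions verify.
      units = np.array([som.winner(x) for x in sst])
      for unit in sorted({tuple(u) for u in units}):
          mask = (units == unit).all(axis=1)
          conf = mask & (confidence > np.percentile(confidence, 80))
          if conf.sum() > 5:
              print(unit, "hit rate among confident predictions:",
                    round(correct[conf].mean(), 2))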
  2. The observed increase in extreme weather has prompted recent methodological advances in extreme event attribution. We propose a machine learning–based approach that uses convolutional neural networks to create dynamically consistent counterfactual versions of historical extreme events under different levels of global mean temperature (GMT). We apply this technique to one recent extreme heat event (southcentral North America 2023) and several historical events that have been previously analyzed using established attribution methods. We estimate that temperatures during the southcentral North America event were 1.18° to 1.42°C warmer because of global warming and that similar events will occur 0.14 to 0.60 times per year at 2.0°C above preindustrial levels of GMT. Additionally, we find that the learned relationships between daily temperature and GMT are influenced by the seasonality of the forced temperature response and the daily meteorological conditions. Our results broadly agree with other attribution techniques, suggesting that machine learning can be used to perform rapid, low-cost attribution of extreme events. 
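    A minimal sketch of the counterfactual idea in this entry, not the paper's architecture: a small PyTorch CNN conditions on a global-mean-temperature (GMT) scalar by broadcasting it to an input channel, and a counterfactual is produced (after training, which is omitted here) by re-evaluating the network with a different GMT; all names and sizes are placeholders:

      # Sketch only: the model below is untrained; in the setting described
      # above it would first be fit to historical data before counterfactual
      # versions of an event are drawn.
      import torch
      import torch.nn as nn

      class CounterfactualCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1),
              )

          def forward(self, field, gmt):
              # Broadcast the GMT scalar to a constant channel so the CNN can
              # condition its output on the level of global warming.
              gmt_map = gmt.view(-1, 1, 1, 1).expand_as(field)
              return self.conv(torch.cat([field, gmt_map], dim=1))

      model = CounterfactualCNN()
      field = torch.randn(1, 1, 32, 64)                   # one daily temperature map
      factual = model(field, torch.tensor([1.2]))         # observed GMT anomaly (deg C)
      counterfactual = model(field, torch.tensor([0.0]))  # preindustrial GMT
      warming_contribution = (factual - counterfactual).mean()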
  3. Males, Jamie (Ed.)
  4. Abstract: Earth system models are powerful tools to simulate the climate response to hypothetical climate intervention strategies, such as stratospheric aerosol injection (SAI). Recent simulations of SAI implement a tool from control theory, called a controller, to determine the quantity of aerosol to inject into the stratosphere to reach or maintain specified global temperature targets, such as limiting global warming to 1.5°C above pre‐industrial temperatures. This work explores how internal (unforced) climate variability can impact controller‐determined injection amounts using the Assessing Responses and Impacts of Solar climate intervention on the Earth system with Stratospheric Aerosol Injection (ARISE‐SAI) simulations. Since the ARISE‐SAI controller determines injection amounts by comparing global annual‐mean surface temperature to predetermined temperature targets, internal variability that impacts temperature can impact the total injection amount as well. Using an offline version of the ARISE‐SAI controller and data from Earth system model simulations, we quantify how internal climate variability and volcanic eruptions impact injection amounts. While idealized, this approach allows for the investigation of a large variety of climate states without additional simulations and can be used to attribute controller sensitivities to specific modes of internal variability.
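    A minimal sketch of an offline feedback controller of the kind described in this entry: a proportional-integral rule maps the gap between global annual-mean temperature and its target to an injection amount, so adding internal variability to the temperature series changes the injections. The gains, target, and noise level below are hypothetical, not the ARISE-SAI values:

      import numpy as np

      def injection_series(temps, target=1.5, kp=0.03, ki=0.03):
          """Yearly injection amounts from a proportional-integral rule."""
          integral, out = 0.0, []
          for t in temps:
              error = t - target         # degrees above the temperature target
              integral += error          # accumulated (integral) error
              out.append(max(0.0, kp * error + ki * integral))
          return np.array(out)

      rng = np.random.default_rng(1)
      forced = np.linspace(1.5, 2.0, 30)              # warming with no variability
      noisy = forced + 0.1 * rng.standard_normal(30)  # add internal variability

      # The difference between the two totals is attributable purely to
      # unforced variability in the temperature fed to the controller.
      print(injection_series(forced).sum(), injection_series(noisy).sum())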
  5. Abstract: A simple method for adding uncertainty to neural network regression tasks in earth science via estimation of a general probability distribution is described. Specifically, we highlight the sinh-arcsinh-normal distributions as particularly well suited for neural network uncertainty estimation. The methodology supports estimation of heteroscedastic, asymmetric uncertainties by a simple modification of the network output and loss function. Method performance is demonstrated by predicting tropical cyclone intensity forecast uncertainty and by comparing against two other common methods for neural network uncertainty quantification (i.e., Bayesian neural networks and Monte Carlo dropout). The simple approach described here is intuitive and applicable when no prior exists and one just wishes to parameterize the output and its uncertainty according to some previously defined family of distributions. The authors believe it will become a powerful, go-to method moving forward.
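    One plausible implementation of the approach in this entry, assuming TensorFlow Probability: the network outputs four values that parameterize a sinh-arcsinh(-normal) distribution, and the loss is the negative log-likelihood of the target under that distribution; the layer sizes are placeholders:

      import tensorflow as tf
      import tensorflow_probability as tfp

      def shash_nll(y_true, y_pred):
          # The four network outputs parameterize the distribution; softplus
          # keeps the scale and tailweight parameters positive.
          loc, raw_scale, skewness, raw_tail = tf.unstack(y_pred, axis=-1)
          dist = tfp.distributions.SinhArcsinh(
              loc=loc,
              scale=tf.math.softplus(raw_scale),
              skewness=skewness,
              tailweight=tf.math.softplus(raw_tail),
          )
          return -tf.reduce_mean(dist.log_prob(tf.reshape(y_true, [-1])))

      # A regression head with four outputs, one per distribution parameter.
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(10,)),
          tf.keras.layers.Dense(32, activation="relu"),
          tf.keras.layers.Dense(4),
      ])
      model.compile(optimizer="adam", loss=shash_nll)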
  6. Abstract: Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and gain insight into the relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of explanation of the network is known a priori, to help objectively assess their performance. Second, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance to zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
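    A minimal sketch of two widely used attribution methods of the kind compared in this entry (plain gradients and input-times-gradient), applied to an arbitrary untrained CNN with placeholder sizes:

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
          nn.Linear(8 * 16 * 16, 1),
      )

      x = torch.randn(1, 1, 16, 16, requires_grad=True)
      model(x).sum().backward()

      saliency = x.grad                       # "gradient" attribution
      input_x_gradient = x.grad * x.detach()  # "input * gradient" attribution

      # The two heatmaps can disagree; judging which is more faithful requires
      # a benchmark whose ground-truth attribution is known, as in the study.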
  7. Abstract: Assessing forced climate change requires the extraction of the forced signal from the background of climate noise. Traditionally, tools for extracting forced climate change signals have focused on one atmospheric variable at a time; however, using multiple variables can reduce noise and allow for easier detection of the forced response. Following previous work, we train artificial neural networks to predict the year of single‐ and multi‐variable maps from forced climate model simulations. To perform this task, the neural networks learn patterns that allow them to discriminate between maps from different years—that is, the neural networks learn the patterns of the forced signal amidst the shroud of internal variability and climate model disagreement. When presented with combined input fields (multiple seasons, variables, or both), the neural networks are able to detect the signal of forced change earlier than when given single fields alone by utilizing complex, nonlinear relationships between multiple variables and seasons. We use layer‐wise relevance propagation, a neural network explainability tool, to identify the multivariate patterns learned by the neural networks that serve as reliable indicators of the forced response. These “indicator patterns” vary in time and between climate models, providing a template for investigating inter‐model differences in the time evolution of the forced response. This work demonstrates how neural networks and their explainability tools can be harnessed to identify patterns of the forced signal within combined fields.
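    A minimal sketch of the "predict the year" framing in this entry, with placeholder data: a network regresses the (standardized) year from climate-model fields, and a combined input is formed by concatenating flattened variables, as in the multi-variable experiments described above:

      import torch
      import torch.nn as nn

      n_years, n_grid = 150, 100
      temperature = torch.randn(n_years, n_grid)    # one map per simulated year
      precipitation = torch.randn(n_years, n_grid)
      years = torch.linspace(-1.0, 1.0, n_years)    # standardized target years

      x = torch.cat([temperature, precipitation], dim=1)  # combined input fields
      model = nn.Sequential(nn.Linear(2 * n_grid, 64), nn.ReLU(), nn.Linear(64, 1))
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

      for _ in range(200):  # the learned weights carry the forced-signal pattern
          optimizer.zero_grad()
          loss = nn.functional.mse_loss(model(x).squeeze(-1), years)
          loss.backward()
          optimizer.step()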
  8. Abstract: We develop and demonstrate a new interpretable deep learning model specifically designed for image analysis in Earth system science applications. The neural network is designed to be inherently interpretable, rather than explained via post hoc methods. This is achieved by training the network to identify parts of training images that act as prototypes for correctly classifying unseen images. The new network architecture extends the interpretable prototype architecture of a previous study in computer science to incorporate absolute location. This is useful for Earth system science where images are typically the result of physics-based processes, and the information is often geolocated. Although the network is constrained to only learn via similarities to a small number of learned prototypes, it can be trained to exhibit only a minimal reduction in accuracy relative to noninterpretable architectures. We apply the new model to two Earth science use cases: a synthetic dataset that loosely represents atmospheric high and low pressure systems, and atmospheric reanalysis fields to identify the state of tropical convective activity associated with the Madden–Julian oscillation. In both cases, we demonstrate that considering absolute location greatly improves testing accuracies when compared with a location-agnostic method. Furthermore, the network architecture identifies specific historical dates that capture multivariate, prototypical behavior of tropical climate variability. Significance Statement: Machine learning models are incredibly powerful predictors but are often opaque “black boxes.” How and why the model makes its predictions is inscrutable—the model is not interpretable. We introduce a new machine learning model specifically designed for image analysis in Earth system science applications. The model is designed to be inherently interpretable and extends previous work in computer science to incorporate location information. This is important because images in Earth system science are typically the result of physics-based processes, and the information is often map based. We demonstrate its use for two Earth science use cases and show that the interpretable network exhibits only a small reduction in accuracy relative to black-box models.
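    A minimal sketch (assumptions throughout, not the paper's architecture) of a location-aware prototype layer in the spirit of this entry: similarity to a learned prototype is computed at every grid point and then scaled by a learned map of per-location weights before pooling, so where a pattern occurs matters as well as whether it occurs:

      import torch
      import torch.nn as nn

      class LocationPrototype(nn.Module):
          def __init__(self, channels, height, width):
              super().__init__()
              self.prototype = nn.Parameter(torch.randn(channels))
              self.location_weight = nn.Parameter(torch.ones(height, width))

          def forward(self, features):  # features: (batch, C, H, W)
              # Negative squared distance to the prototype at each grid point.
              diff = features - self.prototype.view(1, -1, 1, 1)
              similarity = -(diff ** 2).sum(dim=1)           # (batch, H, W)
              weighted = similarity * self.location_weight   # absolute location
              return weighted.flatten(1).max(dim=1).values   # best-match score

      layer = LocationPrototype(channels=8, height=16, width=16)
      score = layer(torch.randn(4, 8, 16, 16))  # one prototype score per sample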
  9. Abstract: Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and does not allow scientists to gain physical insights about the problem at hand. Many different methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI), which aims at attributing the network’s prediction to specific features in the input domain. XAI methods are usually assessed by using benchmark datasets (such as MNIST or ImageNet for image classification). However, an objective, theoretically derived ground truth for the attribution is lacking for most of these datasets, making the assessment of XAI in many cases subjective. Also, benchmark datasets specifically designed for problems in geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, to generate attribution benchmark datasets for regression problems for which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function that was used for simulation. We then compare estimated heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks such as the ones introduced herein are of great importance for further application of neural networks in the geosciences, and for more objective assessment and accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
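    A minimal sketch of an additively separable attribution benchmark as described in this entry, with hypothetical component functions: because the target is a sum of known per-input terms, the exact contribution of every input to every sample is available as an attribution ground truth:

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((1000, 3))

      # Hypothetical component functions, one per input feature.
      components = [np.sin, np.square, np.tanh]
      contributions = np.stack(
          [f(x[:, i]) for i, f in enumerate(components)], axis=1
      )
      y = contributions.sum(axis=1)  # regression target

      # `contributions` is the a-priori ground truth against which an XAI
      # method's estimated heatmaps can be compared, sample by sample.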